
    We favor formal models of heuristics rather than lists of loose dichotomies: a reply to Evans and Over

    In their comment on Marewski et al. (Good judgments do not require complex cognition, 2009), Evans and Over (Heuristic thinking and human intelligence: a commentary on Marewski, Gaissmaier and Gigerenzer, 2009) conjectured that heuristics can often lead to biases and are not error-free. This is a most surprising critique. The computational models of heuristics we have tested allow for quantitative predictions of how many errors a given heuristic will make, and we and others have measured the amount of error by analysis, computer simulation, and experiment. This is clear progress over simply giving heuristics labels, such as availability, that do not allow for quantitative comparisons of errors. Evans and Over argue that the reason people rely on heuristics is the accuracy-effort trade-off. However, comparisons between heuristics and more effortful strategies, such as multiple regression, have shown that there are many situations in which a heuristic is more accurate with less effort. Finally, we do not see how the fast-and-frugal heuristics program could benefit from a dual-process framework unless that framework is made more precise. Instead, the dual-process framework could benefit if its two “black boxes” (Type 1 and Type 2 processes) were replaced by computational models of both heuristics and other processes.
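    The kind of quantitative error comparison described above can be illustrated with a minimal sketch (not the authors' code): a take-the-best-style heuristic, assumed here to use cues already ordered by validity, is pitted against multiple regression on synthetic paired comparisons. All data and parameter values are invented for illustration, and a full analysis would compare out-of-sample prediction rather than in-sample fit.

```python
# Illustrative sketch only: synthetic cues and criterion, not data from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_cues = 20, 5
cues = rng.integers(0, 2, size=(n_objects, n_cues)).astype(float)
criterion = cues @ np.array([5.0, 4.0, 3.0, 2.0, 1.0]) + rng.normal(0, 1, n_objects)

def take_the_best(a, b):
    """Pick the object favoured by the first (most valid) cue that discriminates."""
    for c in range(n_cues):                      # assumes cues are ordered by validity
        if cues[a, c] != cues[b, c]:
            return a if cues[a, c] > cues[b, c] else b
    return int(rng.choice([a, b]))               # guess if no cue discriminates

# Benchmark: multiple regression fitted to all cues, choosing the higher prediction.
weights, *_ = np.linalg.lstsq(cues, criterion, rcond=None)
def regression_choice(a, b):
    return a if cues[a] @ weights > cues[b] @ weights else b

pairs = [(a, b) for a in range(n_objects) for b in range(a + 1, n_objects)]
for name, rule in [("take-the-best", take_the_best), ("regression", regression_choice)]:
    accuracy = np.mean([criterion[rule(a, b)] == max(criterion[a], criterion[b])
                        for a, b in pairs])
    print(f"{name}: proportion of correct inferences = {accuracy:.2f}")
```

    Counting correct and incorrect inferences in this way is what makes the comparison quantitative; a verbal label such as "availability" offers no analogous error prediction.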

    Spontaneous and deliberate future thinking: A dual process account

    In this article, we address an apparent paradox in the literature on mental time travel and mind-wandering: How is it possible that future thinking is both constructive, yet often experienced as occurring spontaneously? We identify and describe two ‘routes’ whereby episodic future thoughts are brought to consciousness, each associated with separable cognitive processes and functions. Voluntary future thinking relies on controlled, deliberate and slow cognitive processing. The other, termed involuntary or spontaneous future thinking, relies on automatic processes that allow ‘fully-fledged’ episodic future thoughts to come to mind freely, often triggered by internal or external cues. To unravel the paradox, we propose that the majority of spontaneous future thoughts are ‘pre-made’ (i.e., each spontaneous future thought is a re-iteration of a previously constructed future event), and therefore based on simple, well-understood memory processes. We also propose that the pre-made hypothesis explains why spontaneous future thoughts occur rapidly, are similar to involuntary memories, and are predominantly about upcoming tasks and goals. We also raise the possibility that spontaneous future thinking is the default mode of imagining the future. This dual-process approach complements and extends standard theoretical approaches that emphasise constructive simulation, and outlines novel opportunities for researchers examining voluntary and spontaneous forms of future thinking.

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, once we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstructing the speech gestures of the speaker rather than those of the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and on viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sounds produced by the speaker to phonemes in the native-language repertoire of the listener. This, on average, improves the recognition of later words. The model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
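    The probability-updating core described above lends itself to a minimal sketch (a hypothetical illustration, not the authors' implementation): the listener's current word hypothesis is used as a teaching signal to strengthen links between the speaker's accented sounds and native phoneme categories. The sound and phoneme symbols below are invented toy values.

```python
# Hypothetical sketch of hypothesis-driven sound-to-phoneme remapping; symbols are toy values.
from collections import defaultdict

# Co-occurrence counts standing in for P(native phoneme | accented sound),
# initialised at 1 so every queried pairing starts with a weak prior.
counts = defaultdict(lambda: defaultdict(lambda: 1.0))

def update_mapping(heard_sounds, hypothesised_phonemes):
    """Strengthen the link between each heard sound and the phoneme it aligned
    with in the word the listener currently hypothesises."""
    for sound, phoneme in zip(heard_sounds, hypothesised_phonemes):
        counts[sound][phoneme] += 1.0

def phoneme_posterior(sound):
    """Normalise the counts for one sound into a probability distribution."""
    total = sum(counts[sound].values())
    return {ph: c / total for ph, c in counts[sound].items()}

# Toy usage: a speaker whose /i/ is heard as something close to the listener's /e/.
update_mapping(heard_sounds=["e", "g"], hypothesised_phonemes=["e", "g"])   # "egg": /e/ really is /e/
update_mapping(heard_sounds=["e", "t"], hypothesised_phonemes=["i", "t"])   # "eat" in this accent
update_mapping(heard_sounds=["e", "z"], hypothesised_phonemes=["i", "z"])   # "ease" in this accent
print(phoneme_posterior("e"))   # probability mass shifts toward the native /i/ as evidence accumulates
```

    Because the update makes no commitment about whether the sound and phoneme codes are motor or auditory, it mirrors the abstract's point that the model is neutral regarding the nature of its representations.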

    A habituation account of change detection in same/different judgments

    We investigated the basis of change detection in a short-term priming task. In two experiments, participants were asked to indicate whether or not a target word was the same as a previously presented cue. Data from a magnetoencephalography experiment failed to reveal different patterns for “same” and “different” responses, consistent with the claim that both arise from a common neural source, with response magnitude defining the difference between immediate novelty and familiarity. In a behavioral experiment, we tested and confirmed the predictions of a habituation account of these judgments by comparing conditions in which the target, the cue, or neither was primed by its presentation in the previous trial. As predicted, cue-primed trials had faster response times and target-primed trials had slower response times relative to the neither-primed baseline. These results were obtained irrespective of response repetition and stimulus–response contingencies. The behavioral and brain-activity data support the view that detection of change drives performance in these tasks and that the underlying mechanism is neuronal habituation.

    The Impact of Fillers on Lineup Performance

    Filler siphoning theory posits that the presence of fillers (known innocents) in a lineup protects an innocent suspect from being chosen by siphoning choices away from that suspect. This mechanism has been proposed as an explanation for why simultaneous lineups (viewing all lineup members at once) induce better performance than showups (one-person identification procedures). We implemented filler siphoning in a computational model (WITNESS; Clark, Applied Cognitive Psychology 17:629–654, 2003) and explored the impact of the number of fillers (lineup size) and filler quality on simultaneous and sequential lineups (viewing lineup members in sequence), comparing both to showups. In limited situations, we found that filler siphoning can produce a simultaneous-lineup performance advantage, but one that is insufficient in magnitude to explain empirical data. However, the magnitude of the empirical simultaneous-lineup advantage can be approximated once criterial variability is added to the model. This modification works by negatively impacting showups rather than by promoting more filler siphoning. In sequential lineups, fillers were found to harm performance. Filler siphoning fails to clarify the relationship between simultaneous lineups and sequential lineups or showups. By incorporating constructs like filler siphoning and criterial variability into a computational model and trying to approximate empirical data, we can sort through explanations of eyewitness decision-making, a prerequisite for policy recommendations.
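    The filler-siphoning intuition can be conveyed with a minimal simulation sketch (loosely inspired by match-to-memory models such as WITNESS, but not an implementation of it, and with purely illustrative parameter values): in a target-absent lineup, every added filler is another chance that someone other than the innocent suspect is the best match, so innocent-suspect identifications fall as lineup size grows.

```python
# Illustrative sketch, not the WITNESS model: all parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)

def innocent_id_rate(n_fillers, n_trials=20_000, criterion=0.5):
    """Target-absent lineups: each member gets a noisy match-to-memory value;
    the witness picks the best-matching member if it exceeds the criterion."""
    hits = 0
    for _ in range(n_trials):
        suspect = rng.normal(0.3, 1.0)               # innocent suspect resembles the culprit somewhat
        fillers = rng.normal(0.0, 1.0, n_fillers)    # fillers resemble the culprit less
        best_filler = fillers.max() if n_fillers else -np.inf
        if suspect > criterion and suspect > best_filler:
            hits += 1
    return hits / n_trials

for k in [0, 2, 5]:                                  # 0 fillers is roughly a showup
    print(f"{k} fillers: innocent-suspect ID rate = {innocent_id_rate(k):.3f}")
```

    As the abstract notes, in the authors' analysis this protective mechanism alone was insufficient to reproduce the empirical simultaneous-lineup advantage; criterial variability had to be added to the model as well.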

    Episodic Memory and Appetite Regulation in Humans

    Psychological and neurobiological evidence implicates hippocampal-dependent memory processes in the control of hunger and food intake. In humans, these have been revealed in the hyperphagia that is associated with amnesia. However, it remains unclear whether 'memory for recent eating' plays a significant role in neurologically intact humans. In this study we isolated the extent to which memory for a recently consumed meal influences hunger and fullness over a three-hour period. Before lunch, half of our volunteers were shown 300 ml of soup and half were shown 500 ml. Orthogonal to this, half consumed 300 ml and half consumed 500 ml. This process yielded four separate groups (25 volunteers in each). Independent manipulation of the 'actual' and 'perceived' soup portion was achieved using a computer-controlled peristaltic pump, designed to either refill or draw soup from a soup bowl in a covert manner. Immediately after lunch, self-reported hunger was influenced by the actual and not the perceived amount of soup consumed. However, two and three hours after meal termination this pattern was reversed: hunger was predicted by the perceived amount and not the actual amount. Participants who thought they had consumed the larger 500-ml portion reported significantly less hunger. This was also associated with an increase in the 'expected satiation' of the soup 24 hours later. For the first time, this manipulation exposes the independent and important contribution of memory processes to satiety. Opportunities exist to capitalise on this finding to reduce energy intake in humans.

    Telerobotic Pointing Gestures Shape Human Spatial Cognition

    This paper explored whether human beings can understand gestures produced by telepresence robots and, if so, whether they can derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech condition (SO, verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when information was presented in an unpredictable order. The findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary.